Use of AI to fight COVID-19 risks harming "disadvantaged groups", experts warn
Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.

This is according to researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (CFI) in two articles published in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain normalcy in 2021.

"Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic," said Dr Stephen Cave, Director of CFI and lead author of one of the articles. "The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology."

In a further paper, co-authored by CFI's Dr Alexa Hagerty, researchers highlight potential consequences arising from the AI now making clinical choices at scale – predicting deterioration rates of patients who might need ventilation, for example – if it does so based on biased data.